At this point in the term, we'll be deviating in our code from McElreath. His course is taught entirely using rethinking, a pedagogical tool with a clear mapping between mathematical models and syntax. But it lacks flexibility and offers fewer modeling options.
On the other hand, there is a package called brms that also does Bayesian modeling. This package uses syntax similar to lme4 (if you've used that), supports a wider range of distributions, integrates with the tidyverse ecosystem, has more extensive documentation, is more actively maintained, is more widely used (i.e., more support is available), and is better suited to complex models.
You’re welcome to use the rethinking package when it suits you, in this course and in your research, but my goal is to introduce you to the brms package. Instead of reviewing the code from McElreath’s lecture today, we’ll be revisiting some familiar models using brms.
model specification
Let’s return to the height and weight data.
data(Howell1, package = "rethinking")
d <- Howell1

library(measurements)
d$height <- conv_unit(d$height, from = "cm", to = "feet")
d$weight <- conv_unit(d$weight, from = "kg", to = "lbs")

describe(d, fast = T) # describe() comes from the psych package
vars n mean sd median min max range skew kurtosis se
height 1 544 4.54 0.91 4.88 1.77 5.88 4.1 -1.26 0.58 0.04
weight 2 544 78.51 32.45 88.31 9.37 138.87 129.5 -0.54 -0.94 1.39
age 3 544 29.34 20.75 27.00 0.00 88.00 88.0 0.49 -0.56 0.89
male 4 544 0.47 0.50 0.00 0.00 1.00 1.0 0.11 -1.99 0.02
d <- d[d$age >= 18, ]
d$height_c <- d$height - mean(d$height)
m42.1 <- brm(
  data = d, 
  family = gaussian,
  weight ~ 1 + height_c,
  prior = c(prior(normal(130, 20), class = Intercept),
            prior(normal(0, 25), class = b),
            prior(uniform(0, 50), class = sigma, ub = 50)),
  iter = 5000, warmup = 1000, chains = 4, seed = 3, 
  file = here("fits/m42.1"))
brm() is the core function for fitting Bayesian models using brms.
The family argument specifies the distribution of the outcome variable. In many examples, we'll use a gaussian (normal) distribution, but there are many, many other options.
The formula argument works just as you would expect from the lm() and lmer() functions you have seen in the past. The benefit of brms is that this formula can easily handle complex and non-linear terms; we'll play with more of these in future classes.
The prior argument sets our priors. Class b refers to population-level slope parameters (sometimes called fixed effects). This argument can become very detailed, specific, and flexible, and we'll play with it more later.
Hamiltonian MCMC runs for a set number of iterations (iter), throws away the first portion (warmup), and repeats the whole process multiple times (chains).
Remember, these are stochastic walks through parameter space, so set a seed for reproducibility. Also, these models can take a while to run, especially when you are developing more complex models. If you specify a file, the output of the model will automatically be saved. Even better, the next time you run this code, R will check for that file and load it into your workspace instead of re-fitting the model. (Just be sure to delete the saved file if you change the model; otherwise brms will keep loading the old fit.)
summary(m42.1)
Family: gaussian
Links: mu = identity; sigma = identity
Formula: weight ~ 1 + height_c
Data: d (Number of observations: 352)
Draws: 4 chains, each with iter = 5000; warmup = 1000; thin = 1;
total post-warmup draws = 16000
Regression Coefficients:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept 99.21 0.50 98.23 100.18 1.00 14807 12164
height_c 42.05 1.95 38.22 45.91 1.00 15921 12552
Further Distributional Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma 9.38 0.36 8.72 10.12 1.00 15626 12366
Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
plot(m42.1)
checking your model
Before we start to interpret our model, we should evaluate whether it does a good job. Posterior predictive checks plot the implied distribution of your outcome next to your actual distribution. If your posterior predictive values are off, your model is off.
pp_check(m42.1)
We can also look at the posterior distributions across our chains. Remember, the chains should cover most of the same space, so these distributions should essentially overlap.
mcmc_plot(m42.1, type = "dens_overlay")
Let’s sample from the posterior. First, get_variables() will tell us everything at our disposal.
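For example:

get_variables(m42.1)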
Let’s focus on just the parameters we’ve estimated. In prior lectures, we’ve drawn samples from the posterior distribution to generate plots and provide summaries. We can use the spread_draws() function to do so.
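A minimal sketch, pulling the parameters we estimated into a long-format data frame of draws (the name p42.1 matches the plotting code below):

p42.1 <- m42.1 %>% 
  spread_draws(b_Intercept, b_height_c, sigma)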
p42.1 %>% 
  ggplot(aes(x = b_Intercept)) +
  geom_density(fill = "#1c5253", color = "white") +
  labs(title = "Posterior probability",
       x = "probability of intercept (mean weight)") +
  scale_y_continuous(NULL, breaks = NULL)
Finally, we might want to plot the bivariate distributions of our parameters.
pairs(m42.1)
If we were encountering this problem for the first time, we would want to work on our priors. These are pretty bad. We have a few tools available to help us define and test our priors.
First, let’s view the available priors for our model:
get_prior(formula = weight ~ 1 + height_c,
          data = d)
                   prior     class     coef group resp dpar nlpar lb ub       source
                  (flat)         b                                            default
                  (flat)         b height_c                              (vectorized)
student_t(3, 98.7, 14.8) Intercept                                            default
   student_t(3, 0, 14.8)     sigma                                  0         default
If you’re ever not sure what coefficients to put priors on, this function can help with that.
Let’s refit our model with our earlier priors. Before we fit this to data, we’ll start by only sampling from our priors.
m42.1p <- brm(
  data = d, 
  family = gaussian,
  weight ~ 1 + height_c,
  prior = c(prior(normal(130, 20), class = Intercept),
            prior(normal(0, 25), class = b),
            prior(uniform(0, 50), class = sigma, ub = 50)),
  iter = 5000, warmup = 1000, seed = 3, 
  sample_prior = "only")
The output of spread_draws() will now contain samples from the prior, not samples from the posterior.
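For example, to get the prior draws used in the plot below:

p42.1p <- m42.1p %>% 
  spread_draws(b_Intercept, b_height_c)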
We’ll plot the regression lines from the priors against the real data, to see if they make sense.
labels <- seq(4, 6, by = .5)
breaks <- labels - mean(d$height)

d %>% 
  ggplot(aes(x = height_c, y = weight)) +
  geom_blank() +
  geom_abline(aes(intercept = b_Intercept, slope = b_height_c), 
              data = p42.1p[1:50, ], # first 50 draws only
              color = "#1c5253", alpha = .3) +
  scale_x_continuous("height (feet)", breaks = breaks, labels = labels) +
  scale_y_continuous("weight (lbs)", limits = c(50, 150))
Let’s see if we can improve upon this model. One thing we know for sure is that the relationship between height and weight is positive. We may not know the exact magnitude, but we can use a distribution that doesn’t go below zero. We’ve already discussed uniform distributions, but those are pretty uninformative – they won’t do a good job regularizing – and we can also run into trouble if our bounds are not inclusive enough.
The log-normal distribution would be a good option here, since it places all of its mass above zero.
m42.2 <- brm(
  data = d, 
  family = gaussian,
  weight ~ height_c,
  prior = c(prior(normal(130, 20), class = Intercept),
            prior(lognormal(1, 2), class = b),
            prior(uniform(0, 50), class = sigma, ub = 50)),
  iter = 5000, warmup = 1000, seed = 3,
  file = here("fits/m42.2"))
summary(m42.2)
Family: gaussian
Links: mu = identity; sigma = identity
Formula: weight ~ height_c
Data: d (Number of observations: 352)
Draws: 4 chains, each with iter = 5000; warmup = 1000; thin = 1;
total post-warmup draws = 16000
Regression Coefficients:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept 99.20 0.50 98.22 100.18 1.00 13657 11564
height_c 42.16 2.01 38.26 46.07 1.00 17384 12434
Further Distributional Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma 9.39 0.36 8.72 10.13 1.00 15405 11446
Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Let's return to the tidybayes functions for summaries. As a reminder, we already saw spread_draws().
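Those draws pipe directly into tidybayes summary functions like mean_qi(). A quick sketch:

m42.2 %>% 
  spread_draws(b_height_c) %>% 
  mean_qi()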
Let’s practice what we’ve learned using a different dataset. We’ll use the built-in msleep dataset from the ggplot2 package, which contains sleep data for various mammals.
data("msleep")

d_sleep <- msleep %>% 
  drop_na(sleep_total, bodywt) %>% 
  mutate(log_weight = log(bodywt))

# Quick look at the data
describe(d_sleep[c("sleep_total", "bodywt", "log_weight")], fast = T)
vars n mean sd median min max range skew kurtosis se
sleep_total 1 83 10.43 4.45 10.10 1.9 19.9 18.0 0.05 -0.71 0.49
bodywt 2 83 166.14 786.84 1.67 0.0 6654.0 6654.0 7.10 53.72 86.37
log_weight 3 83 0.84 3.26 0.51 -5.3 8.8 14.1 0.30 -0.76 0.36
Our goal is to model the relationship between body weight and total sleep duration. Because body weight is highly skewed, we’ll use the log-transformed weight.
exercise
Let’s set up a model to predict sleep duration from log body weight:
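Here is one possible specification, mirroring our earlier models. The model name m42.3 and the priors are my suggestions, loosely based on the descriptives above:

# m42.3 is a placeholder name; priors are rough guesses from the descriptives
m42.3 <- brm(
  data = d_sleep, 
  family = gaussian,
  sleep_total ~ 1 + log_weight,
  prior = c(prior(normal(10, 5), class = Intercept),  # mean sleep is near 10 hours
            prior(normal(0, 3), class = b),           # weakly informative slope
            prior(uniform(0, 10), class = sigma, ub = 10)),
  iter = 5000, warmup = 1000, chains = 4, seed = 3,
  file = here("fits/m42.3"))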
We can make two kinds of predictions based on our model. First, we can get a posterior predictive distribution using add_predicted_draws():
# simulate new data
height_c <- sample(d$height_c, size = 1e2, replace = T)

# get predictions
predictions <- data.frame(height_c) %>% 
  add_predicted_draws(m42.2, seed = 1)
dim(predictions)
[1] 1600000 6
head(predictions)
# A tibble: 6 × 6
# Groups: height_c, .row [1]
height_c .row .chain .iteration .draw .prediction
<dbl> <int> <int> <int> <int> <dbl>
1 -0.301 1 NA NA 1 81.5
2 -0.301 1 NA NA 2 88.5
3 -0.301 1 NA NA 3 78.7
4 -0.301 1 NA NA 4 103.
5 -0.301 1 NA NA 5 89.1
6 -0.301 1 NA NA 6 78.2
Or, we can get expected values using add_epred_draws():
# get expected values
expected_vals <- data.frame(height_c) %>% 
  add_epred_draws(m42.2, seed = 1)
dim(expected_vals)
[1] 1600000 6
head(expected_vals)
# A tibble: 6 × 6
# Groups: height_c, .row [1]
height_c .row .chain .iteration .draw .epred
<dbl> <int> <int> <int> <int> <dbl>
1 -0.301 1 NA NA 1 87.4
2 -0.301 1 NA NA 2 86.8
3 -0.301 1 NA NA 3 86.4
4 -0.301 1 NA NA 4 87.4
5 -0.301 1 NA NA 5 85.9
6 -0.301 1 NA NA 6 85.7
The posterior predictive distribution should be wider than the distribution of expected values, because it also includes the residual variability (sigma):

predictions %>% 
  full_join(expected_vals) %>% 
  pivot_longer(c(.prediction, .epred)) %>% 
  ggplot(aes(x = value, group = name)) +
  geom_density(aes(fill = name), alpha = .5)
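The same workflow applies to the exercise model. A sketch, assuming you fit a sleep model as suggested above (here called m42.3):

# simulate new data on the log-weight scale
log_weight <- sample(d_sleep$log_weight, size = 1e2, replace = T)

# posterior predictive draws from the (hypothetical) sleep model
predictions <- data.frame(log_weight) %>% 
  add_predicted_draws(m42.3, seed = 1)
head(predictions)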
# A tibble: 6 × 6
# Groups: log_weight, .row [1]
log_weight .row .chain .iteration .draw .prediction
<dbl> <int> <int> <int> <int> <dbl>
1 1.92 1 NA NA 1 7.46
2 1.92 1 NA NA 2 10.4
3 1.92 1 NA NA 3 6.34
4 1.92 1 NA NA 4 14.6
5 1.92 1 NA NA 5 11.4
6 1.92 1 NA NA 6 6.43
predictions %>% 
  ggplot(aes(x = .prediction)) +
  geom_density(aes(x = sleep_total), data = msleep) +
  geom_histogram(aes(y = after_stat(density)), 
                 fill = "#1c5253", color = "white", alpha = .3)
model fit and comparisons
If you want Pareto-smoothed importance-sampling leave-one-out cross-validation (PSIS-LOO):
loo1 <- loo(m42.2, save_psis = T)
loo1
Computed from 16000 by 352 log-likelihood matrix.
Estimate SE
elpd_loo -1288.5 14.0
p_loo 3.2 0.4
looic 2577.1 27.9
------
MCSE of elpd_loo is 0.0.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.8, 1.1]).
All Pareto k estimates are good (k < 0.7).
See help('pareto-k-diagnostic') for details.
And for the widely applicable information criterion (WAIC):
waic(m42.2)
Computed from 16000 by 352 log-likelihood matrix.
Estimate SE
elpd_waic -1288.5 14.0
p_waic 3.2 0.4
waic 2577.1 27.9
Remember, these are primarily used to compare multiple models. See the loo package for more functions to help you compare models and identify influential data points.
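For example, loo_compare() will rank models by their expected log predictive density (both fits need to be in your workspace):

loo_compare(loo(m42.1), loo(m42.2))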
bonus
Try adding a new predictor to your model (e.g., vore - what type of food the animal eats). How does this change your predictions?
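One way to start (a sketch; vore contains missing values, so we drop those rows first, and m42.4 is just a placeholder name):

d_sleep2 <- msleep %>% 
  drop_na(sleep_total, bodywt, vore) %>% 
  mutate(log_weight = log(bodywt))

# hypothetical extension of the exercise model with a categorical predictor
m42.4 <- brm(
  data = d_sleep2, 
  family = gaussian,
  sleep_total ~ 1 + log_weight + vore,
  prior = c(prior(normal(10, 5), class = Intercept),
            prior(normal(0, 3), class = b),
            prior(uniform(0, 10), class = sigma, ub = 10)),
  iter = 5000, warmup = 1000, chains = 4, seed = 3)

Then compare its predictions (e.g., with pp_check() or add_predicted_draws()) to the weight-only model.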